Complex systems are ubiquitous in the real world and tend to have complicated and poorly understood dynamics. For their control problems, the challenge is to guarantee accuracy, robustness, and generalization in such complicated and troubled environments. Fortunately, a complex system can be divided into multiple modular structures that human cognition appears to exploit. Inspired by this cognition, a novel control method, Causal Coupled Mechanisms (CCMs), is proposed, which explores the cooperation in division and the competition in combination. Our method employs the theory of hierarchical reinforcement learning (HRL), in which 1) a high-level policy with competition awareness divides the whole complex system into multiple functional mechanisms, and 2) a low-level policy completes the control task of each mechanism. In particular, a cascade control module for cooperation supports the series operation of CCMs, and a forward coupled reasoning module is used to recover the coupling information lost in the division process. On both synthetic systems and a real-world biological regulatory system, the CCM method achieves robust and state-of-the-art control results even under unpredictable random noise. Furthermore, generalization results show that reusing prepared, specialized CCMs helps the method perform well in environments with different confounders and dynamics.
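As a rough illustration of the divide-then-control structure described above, the following sketch (ours, with hypothetical names and toy policies; not the authors' implementation) shows a high-level step assigning state dimensions to functional mechanisms and per-mechanism low-level controllers acting on the resulting sub-states.

```python
# A minimal sketch of the two-level control loop, assuming a learned
# high-level partitioning policy and per-mechanism low-level policies.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_MECHANISMS = 6, 2

def high_level_partition(state):
    # Stand-in for the competition-aware high-level policy: assign each
    # state dimension to one of N_MECHANISMS functional mechanisms.
    scores = rng.random((STATE_DIM, N_MECHANISMS))  # would be a learned network
    return scores.argmax(axis=1)

def low_level_control(sub_state):
    # Stand-in for a learned low-level policy: one control signal per mechanism.
    return -0.1 * sub_state.sum()  # toy proportional controller

state = rng.standard_normal(STATE_DIM)
assignment = high_level_partition(state)
actions = [low_level_control(state[assignment == m]) for m in range(N_MECHANISMS)]
print("mechanism assignment:", assignment, "actions:", actions)
```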
Object detection using single-point supervision has received increasing attention over the years. In this paper, we attribute the large performance gap to the failure to generate high-quality proposal bags, which are crucial for multiple instance learning (MIL). To address this problem, we introduce a lightweight alternative to off-the-shelf proposal (OTSP) methods and thereby create a Point-to-Box Network (P2BNet), which can construct an inter-objects balanced proposal bag by generating proposals in an anchor-like way. By fully investigating the accurate position information, P2BNet further constructs an instance-level bag, avoiding the mixture of multiple objects. Finally, a coarse-to-fine policy in a cascade fashion is utilized to improve the IoU between proposals and the ground truth (GT). Benefiting from these strategies, P2BNet is able to produce high-quality instance-level bags for object detection. Relative to the previous best PSOD method, P2BNet improves the mean average precision (AP) by more than 50% on the MS COCO dataset. It also demonstrates great potential to bridge the performance gap between point-supervised and bounding-box-supervised detectors. The code will be released at github.com/ucas-vg/p2bnet.
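To make the anchor-style bag construction concrete, here is a hedged sketch of generating a proposal bag around one annotated point; the function name, scales, and aspect ratios are our assumptions for illustration, not P2BNet's actual settings.

```python
# Generate a bag of candidate boxes centered at a single point annotation,
# in an anchor-like way; each bag is then fed to a MIL head for selection.
import itertools
import numpy as np

def proposal_bag(point, scales=(32, 64, 128), ratios=(0.5, 1.0, 2.0)):
    """Return an array of (x1, y1, x2, y2) boxes centered at `point`."""
    cx, cy = point
    boxes = []
    for s, r in itertools.product(scales, ratios):
        w, h = s * np.sqrt(r), s / np.sqrt(r)  # width/height with ratio w/h = r
        boxes.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.array(boxes)

bag = proposal_bag((120.0, 80.0))
print(bag.shape)  # (9, 4): one bag per annotated point
```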
The bounding-box annotation form is the most commonly used one for visual object localization tasks. However, bounding-box annotation relies on a large number of precisely annotated bounding boxes, which is expensive, laborious, and thus impractical in real scenarios, and even redundant for some applications that do not care about size. Therefore, we propose a point-based framework that annotates each person with a coarse point (CoarsePoint), which can be any point within the object extent, instead of an accurate bounding box. The person's location is then predicted as a 2D coordinate in the image. This greatly simplifies the data annotation pipeline. However, CoarsePoint annotation inevitably causes decreased label reliability (label uncertainty) and network confusion during training. Therefore, we propose a point self-refinement approach that iteratively updates the point annotations in a self-paced way. The proposed refinement alleviates the label uncertainty and progressively improves localization performance. Experiments show that our approach achieves comparable object localization performance while saving up to 80% of the annotation cost. The code is enclosed in the supplementary materials.
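The self-paced refinement step can be pictured as follows; this is our minimal sketch of the idea, where the update rule, radius, and momentum are assumptions rather than the paper's implementation.

```python
# Each coarse point drifts toward the confidence-weighted centroid of the
# network's nearby predictions, iteratively reducing label uncertainty.
import numpy as np

def refine_point(point, pred_points, pred_scores, radius=20.0, momentum=0.8):
    d = np.linalg.norm(pred_points - point, axis=1)
    mask = d < radius                      # only trust nearby predictions
    if not mask.any():
        return point
    w = pred_scores[mask] / pred_scores[mask].sum()
    centroid = (w[:, None] * pred_points[mask]).sum(axis=0)
    return momentum * point + (1 - momentum) * centroid  # self-paced update

point = np.array([100.0, 60.0])
preds = np.array([[104.0, 63.0], [98.0, 58.0], [300.0, 200.0]])
scores = np.array([0.9, 0.7, 0.8])
for _ in range(5):                         # repeat as training proceeds
    point = refine_point(point, preds, scores)
print(point)
```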
Night imaging with modern smartphone cameras is troublesome due to low photon counts and unavoidable noise in the imaging system. Directly adjusting the exposure time and ISO rating cannot obtain sharp and noise-free images at the same time in low-light conditions. Although many methods have been proposed to enhance noisy or blurry night images, their performance on real-world photos remains unsatisfactory for two main reasons: 1) the limited information in a single image and 2) the domain gap between synthetic training images and real-world photos (e.g., differences in blur regions and resolution). To exploit the information in consecutive long- and short-exposure images, we propose a learning-based pipeline to fuse them. A D2HNet framework is developed to recover a high-quality image by deblurring and enhancing the long-exposure image under the guidance of the short-exposure image. To shrink the domain gap, we leverage a two-phase DeblurNet-EnhanceNet architecture, which performs accurate blur removal at a fixed low resolution so that it can handle large-range blur in inputs of different resolutions. In addition, we synthesize a D2-Dataset from HD videos and experiment on it. Results on the validation set and real photos demonstrate that our method achieves better visual quality and state-of-the-art quantitative scores. The D2HNet code, models, and D2-Dataset can be found at https://github.com/zhaoyuzhi/D2HNet.
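The two-phase design can be sketched as follows; the modules here are single-convolution placeholders and the fixed resolution is an assumption, so this only illustrates the data flow, not the released D2HNet.

```python
# Phase 1 deblurs at a fixed low resolution (making blur magnitude
# resolution-independent); phase 2 enhances details at full resolution.
import torch
import torch.nn.functional as F
from torch import nn

deblur_net = nn.Conv2d(6, 3, 3, padding=1)   # placeholder for DeblurNet
enhance_net = nn.Conv2d(9, 3, 3, padding=1)  # placeholder for EnhanceNet
LOW_RES = 256                                # assumed fixed deblurring size

def d2h_like_forward(long_exp, short_exp):
    h, w = long_exp.shape[-2:]
    lo = lambda x: F.interpolate(x, (LOW_RES, LOW_RES), mode="bilinear",
                                 align_corners=False)
    # Phase 1: remove blur at low resolution, guided by the short exposure.
    coarse = deblur_net(torch.cat([lo(long_exp), lo(short_exp)], dim=1))
    coarse = F.interpolate(coarse, (h, w), mode="bilinear", align_corners=False)
    # Phase 2: enhance details at the original resolution.
    return enhance_net(torch.cat([coarse, long_exp, short_exp], dim=1))

out = d2h_like_forward(torch.rand(1, 3, 512, 512), torch.rand(1, 3, 512, 512))
print(out.shape)  # torch.Size([1, 3, 512, 512])
```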
The rapid development of deep learning has brought great progress in segmentation, one of the fundamental tasks of computer vision. However, current segmentation algorithms mostly depend on the availability of pixel-level annotations, which are often expensive, tedious, and laborious. To alleviate this burden, the past few years have witnessed increasing attention to building label-efficient, deep-learning-based segmentation algorithms. This paper offers a comprehensive review of label-efficient segmentation methods. To this end, we first develop a taxonomy that organizes these methods according to the supervision provided by different types of weak labels (including no supervision, coarse supervision, incomplete supervision, and noisy supervision), supplemented by the types of segmentation problems (including semantic segmentation, instance segmentation, and panoptic segmentation). Next, we summarize existing label-efficient segmentation methods from a unified perspective that discusses an important question: how to bridge the gap between weak supervision and dense prediction. Current methods are mostly based on heuristic priors, such as cross-pixel similarity, cross-label constraint, cross-view consistency, and cross-image relation. Finally, we share our views on future research directions for label-efficient deep segmentation.
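To make one of these heuristic priors concrete, below is a small sketch (ours, not from the survey) of a cross-pixel similarity objective that encourages pixels with similar colors to receive similar label distributions.

```python
# Pairwise affinity loss: color-similar pixels are penalized for
# disagreeing in their predicted label distributions.
import torch
import torch.nn.functional as F

def cross_pixel_similarity_loss(logits, image, sigma=0.1):
    """logits: (N, C) per-pixel class scores; image: (N, 3) per-pixel colors."""
    probs = F.softmax(logits, dim=1)
    color_dist = torch.cdist(image, image)             # (N, N) color distances
    affinity = torch.exp(-color_dist**2 / (2 * sigma**2))
    label_dist = torch.cdist(probs, probs)             # disagreement in labels
    return (affinity * label_dist).mean()

loss = cross_pixel_similarity_loss(torch.randn(64, 21), torch.rand(64, 3))
print(loss.item())
```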
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
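As a hedged illustration of image-level photometric alignment (our simplification; the paper's module may differ), one can match the per-channel statistics of a target-domain image to source-domain statistics:

```python
# Shift a target image's per-channel mean/std toward source-domain
# statistics, reducing low-level image-statistics discrepancy.
import torch

def photometric_align(target, src_mean, src_std, eps=1e-6):
    """target: (3, H, W); src_mean/src_std: (3,) source-domain statistics."""
    t_mean = target.mean(dim=(1, 2), keepdim=True)
    t_std = target.std(dim=(1, 2), keepdim=True)
    normed = (target - t_mean) / (t_std + eps)
    return normed * src_std.view(3, 1, 1) + src_mean.view(3, 1, 1)

aligned = photometric_align(torch.rand(3, 64, 64),
                            src_mean=torch.tensor([0.45, 0.43, 0.41]),
                            src_std=torch.tensor([0.23, 0.22, 0.24]))
print(aligned.shape)
```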
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
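The style-aware adaptation can be sketched as a feed-forward layer whose hidden channels are scaled by a projection of the style code (equivalent to re-weighting the second linear layer's input weights); this is our reading with hypothetical dimensions, not the released StyleTalk code.

```python
# A transformer feed-forward block modulated by a style code.
import torch
from torch import nn

class StyleAdaptiveFeedForward(nn.Module):
    def __init__(self, d_model=256, d_ff=1024, d_style=128):
        super().__init__()
        self.fc1, self.fc2 = nn.Linear(d_model, d_ff), nn.Linear(d_ff, d_model)
        self.to_scale = nn.Linear(d_style, d_ff)  # style code -> channel scales

    def forward(self, x, style_code):
        scale = torch.sigmoid(self.to_scale(style_code))   # (B, d_ff)
        h = torch.relu(self.fc1(x)) * scale.unsqueeze(1)   # style-scaled hidden
        return self.fc2(h)

ff = StyleAdaptiveFeedForward()
out = ff(torch.randn(2, 50, 256), torch.randn(2, 128))  # (batch, tokens, dim)
print(out.shape)
```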
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, making predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
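A condensed sketch of the second-stage objective, under our reading: the encoder predicts ego-motion from a single frame and is supervised by a photometric error; the warp below is a placeholder where a real pipeline would synthesize the view using the stage-one depth.

```python
# Stage 2: the visual encoder alone predicts future ego-motion, and the
# photometric error of the induced view synthesis supervises it.
import torch
from torch import nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 6))  # toy encoder

def stage2_loss(frame_t, frame_t1, warp_fn):
    ego_motion = encoder(frame_t)            # 6-DoF pose from one image only
    reconstructed = warp_fn(frame_t1, ego_motion)
    return (frame_t - reconstructed).abs().mean()   # photometric error

# Identity warp as a placeholder for depth-based view synthesis.
loss = stage2_loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64),
                   warp_fn=lambda img, pose: img)
print(loss.item())
```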
Increasing research interest focuses on sequential recommender systems, which aim to model dynamic sequence representations precisely. However, the loss functions most commonly used in state-of-the-art sequential recommendation models have essential limitations. To name a few, Bayesian Personalized Ranking (BPR) loss suffers from the vanishing gradient problem caused by numerous negative samples, as well as prediction biases; Binary Cross-Entropy (BCE) loss is sensitive to the number of negative samples, and is therefore likely to ignore valuable negative examples and reduce training efficiency; Cross-Entropy (CE) loss only focuses on the last timestamp of the training sequence, which causes low utilization of sequence information and results in inferior user sequence representations. To avoid these limitations, in this paper we propose to calculate a Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct, enjoying the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing CCE loss on three state-of-the-art models, GRU4Rec, SASRec, and S3-Rec, can reach 125.63%, 69.90%, and 33.24% average improvement in full-ranking NDCG@5, respectively. Using CCE, the performance curve of the models on the test data increases rapidly with wall-clock time, and is superior to that of other loss functions throughout almost the whole process of model training.
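Since CCE is defined over every timestamp of the sequence, a direct sketch is short. We assume full-catalog logits at each position and masked padding; this is our reading of the definition, not the authors' code.

```python
# Cross-entropy computed at *every* position of the training sequence,
# then normalized over the valid (non-padded) timestamps.
import torch
import torch.nn.functional as F

def cce_loss(logits, targets, pad_id=0):
    """logits: (B, T, num_items); targets: (B, T) next-item ids."""
    ce = F.cross_entropy(logits.transpose(1, 2), targets, reduction="none")
    mask = (targets != pad_id).float()       # ignore padded timestamps
    return (ce * mask).sum() / mask.sum()

B, T, N = 4, 10, 1000
loss = cce_loss(torch.randn(B, T, N), torch.randint(1, N, (B, T)))
print(loss.item())
```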
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative, where the goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful tool for aiding image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
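The multi-task pyramid objective can be sketched as a per-scale sum of a restoration term and a siamese comparison term; the loss forms and weights below are assumptions for illustration, not the PCRLv2 release.

```python
# Sum a pixel-restoration loss and a siamese feature-comparison loss
# at each level of the feature pyramid.
import torch
import torch.nn.functional as F

def pyramid_ssl_loss(restored, originals, feats_a, feats_b, weight=1.0):
    """restored/originals: per-scale image lists; feats_a/feats_b: per-scale
    projected siamese features."""
    loss = 0.0
    for r, o, fa, fb in zip(restored, originals, feats_a, feats_b):
        loss = loss + F.mse_loss(r, o)                        # pixel restoration
        loss = loss + weight * (1 - F.cosine_similarity(fa, fb, dim=1).mean())
    return loss

scales = [(1, 3, 32, 32), (1, 3, 16, 16)]
imgs = [torch.rand(s) for s in scales]
loss = pyramid_ssl_loss([i + 0.01 for i in imgs], imgs,
                        [torch.randn(1, 128) for _ in scales],
                        [torch.randn(1, 128) for _ in scales])
print(loss.item())
```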